AIbase

# MNLI Fine-tuning

**Deberta V2 Xxlarge Mnli** — microsoft · MIT license
DeBERTa V2 XXLarge is an enhanced BERT variant built on a disentangled attention mechanism; with 1.5 billion parameters, it surpasses RoBERTa and XLNet on natural language understanding tasks.
Tags: Large Language Model, Transformers, English
**Bert Small Mnli** — prajjwal1
A PyTorch model converted from the TensorFlow checkpoints in the official Google BERT repository, introduced in the paper 'Well-Read Students Learn Better: On the Importance of Pre-training Compact Models' and fine-tuned on the MNLI dataset.
Tags: Large Language Model
**ZSD Microsoft V2xxlmnli** — NDugar · MIT license
A decoding-enhanced BERT model with disentangled attention (DeBERTa), in its large-scale version fine-tuned on the MNLI task.
Tags: Large Language Model, Transformers, English
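The models on this page are all fine-tuned on MNLI, a three-way sentence-pair classification task whose classes are contradiction, neutral, and entailment. A minimal sketch of how a fine-tuned model's output logits map onto those labels (the label order and the logit values below are illustrative assumptions; each actual checkpoint defines its own `id2label` mapping in its config):

```python
import math

# MNLI is three-way sentence-pair classification; a fine-tuned model
# emits one logit per class. This label order is a common convention,
# but each checkpoint defines its own id2label mapping -- check the
# model's config before relying on it.
MNLI_LABELS = ["contradiction", "neutral", "entailment"]

def softmax(logits):
    """Convert raw logits into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return (label, probability) for the highest-scoring class."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return MNLI_LABELS[best], probs[best]

# Illustrative (made-up) logits for a premise/hypothesis pair such as:
#   premise:    "A soccer game with multiple males playing."
#   hypothesis: "Some men are playing a sport."
label, prob = classify([-2.1, 0.3, 3.4])
print(label, round(prob, 3))
```

In practice the logits would come from running a checkpoint such as `microsoft/deberta-v2-xxlarge-mnli` on a tokenized premise/hypothesis pair; only the final logits-to-label step is sketched here.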
AIbase
Empowering the Future, Your AI Solution Knowledge Base
© 2025 AIbase